At China’s military parade in September 2025, tanks and marching troops were not the main attraction. Instead, the spotlight was on unmanned ground vehicles, underwater and aerial drones, and autonomous combat aircraft designed to operate alongside piloted fighter jets. Three researchers from Georgetown University’s Center for Security and Emerging Technology (CSET)—Sam Bresnick, Emelia S. Probasco, and Cole McFaul—have now explained what lies behind this display in an article published in Foreign Affairs.

The team analyzed thousands of publicly available PLA procurement notices from the past three years. According to their findings, the PLA is actively testing AI systems for unmanned combat vehicles; cyber defense; maritime tracking; target acquisition on land, at sea, and in space; and deepfake-enabled disinformation. The breadth of experimentation and the pace of testing are striking, the authors note.

Drone swarms, robotic dogs, and a distrustful command structure

China’s military modernization follows three overlapping phases, according to the researchers: mechanization; digital networking of platforms and sensors; and intelligent warfare—the use of AI to automate operations and decision-making. China has already made substantial progress in the first two phases. The procurement data now show that the PLA is aggressively pushing the third phase.

Specifically, the PLA is developing swarms of aerial drones capable of independently identifying, tracking, and coordinating attacks on targets. The documents also reference requirements for robotic dogs and humanoid robots. In space, the PLA is working on algorithms for counter-satellite operations and small robots designed to attach themselves to adversary satellites and disable them. Underwater, China is investing in autonomous vehicles and sensor networks aimed at long-term tracking of U.S. submarines worldwide.

The PLA is also pursuing AI-based decision-support systems. According to the authors, China’s political and military leadership distrusts its own chain of command and fears being overwhelmed in a fast-moving conflict. AI systems are intended to anticipate adversary movements and compensate for the PLA’s limited combat experience. PLA officers and soldiers already use AI tools to simulate virtual battlefields and model enemy behavior.

Cognitive warfare: deepfakes as a weapon

In the information domain, PLA ambitions go beyond traditional cyber defense. Several procurement documents explicitly call for deepfake technologies. The military views AI-generated images, videos, and audio as effective tools for influencing public opinion and manipulating adversaries’ perception and decision-making during conflicts.

While U.S. initiatives in AI-assisted decision-making focus primarily on planning and force management—such as Palantir’s Maven Smart System or U.S. Indo-Pacific Command’s Joint Fires Network—the PLA is developing systems that monitor international media, identify political attitudes among foreign populations, predict social unrest, and actively manipulate adversary perceptions.

Rapid experimentation over waiting for breakthroughs

According to the researchers, Beijing is not waiting for technological breakthroughs. Instead, it is experimenting with what is already available, betting that incremental improvements will compound over time. Many of the analyzed documents specify short development timelines that enable rapid, relatively low-cost testing. Subsidies, tax incentives, and other benefits encourage civilian technology firms to adapt their products for military use. This civil-military integration allows the PLA to leverage China’s strengths in smart manufacturing, robotics, and battery technology.

Some of these efforts resemble U.S. programs such as the Pentagon’s Replicator Initiative or the Combined Joint All-Domain Command and Control (CJADC2) concept. The authors describe an “iterative cycle of technological change” in which Washington and Beijing continuously respond to one another. The outcome of this competition will depend on which side can develop and scale new capabilities faster. Any technological advantage, they argue, may be hard-won but short-lived.

Over-automation as an escalation risk

The Georgetown researchers warn of a fundamental risk. While the U.S. military insists on “appropriate human judgment” exercised by experienced personnel, the PLA may be tempted to use AI decision systems as substitutes for its relatively weak and inexperienced officer corps. Overreliance on computer-generated analysis could lead to misinterpretation of military or diplomatic signals and flawed decisions.

Another danger lies in data dependency. Some AI decision systems rely on open-source information, which could incentivize militaries to manipulate the information environment—by flooding social media with false signals or disrupting commercial satellite imagery providers. Such actions, intended to deceive an adversary’s AI tools, could trigger unintended escalation.

At the same time, the path to intelligent warfare remains fraught with challenges. The war in Ukraine has demonstrated that developing autonomous drones is not the same as deploying them effectively on contested battlefields. The PLA has limited combat experience and lacks many of the datasets required for military AI training, such as classified imagery of military platforms or electromagnetic signatures of radar and weapons systems.

Washington caught in a dilemma

The authors argue that the United States finds itself in a paradoxical position. On one hand, the U.S. military still enjoys advantages in computing power, technical talent, and operational experience. On the other, Washington has recently designated AI company Anthropic as a supply-chain risk, effectively excluding a leading frontier AI provider from government collaboration. The researchers call this deeply concerning: national security, they argue, requires expanding such partnerships, not engaging in “dramatic public disputes.”

U.S. defense procurement has long moved at a “glacial pace.” Although the 2026 National Defense Authorization Act includes reforms and Defense Secretary Pete Hegseth has instructed the Pentagon to adopt a “wartime approach” toward internal blockers, procurement reform alone is insufficient. The Pentagon must build closer relationships with frontier AI labs—not only licensing technology, but integrating field engineers and data scientists. New standards are also needed for dealing with AI-enabled deception, along with diplomatic channels to establish norms for the responsible military use of AI.

China’s third phase of military modernization is now fully underway, the researchers conclude. Even if individual AI systems fail, rapid experimentation will accelerate learning and improvement. Beijing is positioning itself to keep the gap with the U.S. military as narrow as possible.